The VU University Library wants its services to be closely connected to the scholarly communication processes of VU researchers.
The way researchers share and claim their findings (scholarly communication) is changing under the influence of new communication technologies. In the digital era the medium has changed, and this influences the way scholars communicate: new ways to discover the latest scientific information, set up faster analyses designed for reproducibility, write collaboratively, publish articles and reusable data in new ways on the internet, do outreach of findings in more than one channel, and have findings assessed in different ways.
This project investigates the use of digital tools in scholarly communication within the VU and VUmc. By joining a global survey run by Kramer and Bosman of Utrecht University [1], we can compare our results with those of other countries.
[1] Kramer, B. and J. Bosman, Innovations in scholarly communication - global survey on research tool usage [version 1; referees: awaiting peer review]. F1000Research 2016, 5:692 (doi: 10.12688/f1000research.8414.1)
The results in this report are meant as conversation material for the faculties, to improve library services so that they better support researchers in their scholarly communication activities.
To bring tools for advancing scholarly communication ourselves, we added web annotation to this report using Hypothes.is. As a reader you can make public and private annotations at the URL of this report (see the top-right corner). You can highlight content in this document and save your thoughts for later, or share them with others, right from the browser as you read along. We encourage you to do so. (In case you save the HTML file, be aware that web annotations are tied to the web address: if the address changes, the annotations will not move with it.)
To start, we have to realise that the survey design was a given to us. We were not involved from the start, but joined this Utrecht University research project when it was well underway. The goal of the researchers Kramer and Bosman was to gain insight into which digital online tools are used during the different activities of scholarly communication.
Our goal is to improve library services, so we first asked library employees which answers they would like to get from the survey data, divided into two parts: 1. tool usage of VU and VUmc researchers as a single group, with comparisons to other parts of the world; 2. tool usage of the different disciplines of VU and VUmc, also with comparisons to other parts of the world.
All these questions can be found in this document containing the VU101 research questions. There you can also find the prioritisation of which questions should be treated first, and which would follow if time was left for further research.
Next, we needed VU and VUmc researchers to fill in the survey. For this we needed a method to filter these researchers from the data, and we needed to communicate the survey to this specific target group.
To filter VU and VUmc researchers out of the survey data, we asked the lead researchers to add a hash [7V4u8a] to the survey URL. This custom URL can be used in communicating the survey. To monitor the survey activity in real time ourselves, we created a short URL from the custom URL: [http://bit.ly/vu101innovations]. This URL was used in communications to VU and VUmc researchers.
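As an illustration of how such a filter works, here is a minimal R sketch; the column name `ref_token` is a hypothetical stand-in for whatever field the survey export uses to store the URL hash:

```r
# Minimal sketch: select VU/VUmc respondents from the full survey export.
# `ref_token` is a hypothetical column holding the hash from the survey URL.
survey <- read.csv("survey_export.csv", stringsAsFactors = FALSE)
vu_survey <- survey[survey$ref_token == "7V4u8a", ]
nrow(vu_survey)  # number of responses arriving via the custom URL
```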
To communicate this survey to this specific target group within the VU and VUmc, we asked the research portfolio managers to distribute the survey in an e-mail from the faculty secretaries. This resulted in 543 visitors in January, and an additional 296 after a reminder in February, as shown in the diagram below.
Figure: web activity of the VU101 survey (number of visits: 543 in January, 296 in February).
From a population of 3,772 academic staff at VU and VUmc, we had a total of 839 visits (22%) to this short URL. As you can see in the results section, 531 people (14%) eventually filled in the survey.
In April we received the anonymised and cleaned (e.g. whitespace removed) data from the lead researchers: one set with all the data, and one with only respondents from the VU and VUmc. We prepared the data for plotting with the following procedure:

1. change each column representing a tool from a string value to a boolean value;
2. set the boolean in a tool column to true when the tool's name was written in the free-text field for that respondent;
3. add a boolean marking respondents affiliated with an institution in an OECD country;
4. mark which column belongs to which phase of the scholarly communication cycle (Discovery, Analysis, Writing, Publication, Outreach, Assessment), and within these phases which columns represent which research activity (e.g. searching literature, writing collaboratively, sharing data, …);
5. label which tools are supported by the VU Library.

Extra information from the tool database (e.g. tool age, Twitter followers and open science category) could be taken into account for additional correlations, but due to time limitations we left this out of the answers for now.
Figure: the data scaffold, i.e. the survey dataset enriched with additional data for making plots.
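To give an impression of these preparation steps, here is a minimal R sketch; all column names (`tool_mendeley`, `tools_other`, `country`) are hypothetical stand-ins for the actual survey export:

```r
# Sketch of the data preparation; column names are illustrative only.
library(dplyr)

oecd_countries <- c("Netherlands", "Germany", "United States", "Turkey")  # abbreviated

prepared <- survey %>%
  mutate(
    # 1. Tool column: from string value ("Mendeley" or "") to boolean
    tool_mendeley = tool_mendeley != "",
    # 2. Also TRUE when the tool is named in the free-text field
    tool_mendeley = tool_mendeley |
      grepl("mendeley", tools_other, ignore.case = TRUE),
    # 3. Boolean for affiliation with an institution in an OECD country
    oecd = country %in% oecd_countries
  )
```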
We chose R as the language for making the plots and producing this HTML report. For each question we made a plot in a separate R file, to ease collaboration, code commenting and debugging. We then nested each plot in an R Markdown (.Rmd) file, where we could add commentary text like the text you are reading now. All .Rmd files are nested in a master report. This allowed us to make a report that immediately updates the plots when the code changes, keeping the graphs in sync with the latest code and data. In the end it saved us time, and it makes reproducing this report as easy as possible.
Figure: the code scaffold, from data to report: make a separate plot from the data for each question, add commentary, and place each question in the report.
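A sketch of this structure, with illustrative file names, using knitr's child-document mechanism:

````markdown
<!-- report.Rmd (master): one child document per research question -->
```{r child = "question_discovery.Rmd"}
```

<!-- question_discovery.Rmd: commentary text plus a chunk that builds the plot -->
```{r discovery-plot, echo=FALSE}
source("plot_discovery.R")  # creates and prints the plot for this question
```
````

Knitting the master file re-runs every plot script, which is what keeps the graphs in sync with the latest code and data.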
Below you can find the results. The result sections are based on the research questions. In each section you can find the plots with commentary explaining what the plot is about, and which results are expected and which are unexpected.
Throughout the results, the scholarly communication phases return in each section as separate paragraphs for Discovery, Analysis, Writing, Publication, Outreach and Assessment.
The plots also use the same colours to indicate the scholarly communication phases.
Figure: the scholarly communication phases: Discovery, Analysis, Writing, Publication, Outreach and Assessment.
This report has the following result sections:
These demographics form the baseline of our study.
| Group | Number of respondents |
|---|---|
| World Wide | 20663 |
| OECD countries | 15752 |
| Netherlands | 2041 |
| VU and VUmc | 531 |
The values below are within the set of VU & VUmc respondents.
| Discipline (multiple selections possible) | Number of respondents |
|---|---|
| Physical Sciences | 39 |
| Engineering and Technology | 35 |
| Life Sciences | 144 |
| Medicine | 181 |
| Social Sciences and Economics | 176 |
| Law | 26 |
| Arts & Humanities | 55 |
| Role | Number of respondents |
|---|---|
| PhD students | 230 |
| Postdoctoral researchers | 70 |
| (Assistant, Associate) Professors | 188 |
| First publication year | Number of respondents |
|---|---|
| before 1991 | 61 |
| 1991-2000 | 70 |
| 2001-2005 | 55 |
| 2006-2010 | 79 |
| 2011-2016 | 168 |
| not published (yet) | 96 |
| Country of affiliation | Number of respondents |
|---|---|
| Netherlands | 519 |
| United States | 3 |
| Germany | 2 |
| Brazil | 1 |
| DR of Congo | 1 |
| India | 1 |
| Italy | 1 |
| Latvia | 1 |
| Turkey | 1 |
The numbers below give the active scientific personnel at the VU on 30 June 2016. For VUmc (Medicine), numbers from the 2015 annual report are used.
| Faculty | Number of scientific personnel |
|---|---|
| Theology (Godgeleerdheid) (1) | 86 |
| Humanities (Geesteswetenschappen) (2) | 221 |
| Law (Rechtsgeleerdheid) (3) | 219 |
| Social Sciences (Sociale Wetenschappen) (4) | 224 |
| Economics and Business Administration (Economische Wetenschappen en Bedrijfskunde) (5) | 430 |
| Sciences (Exacte Wetenschappen) (6) | 390 |
| Earth and Life Sciences (Aard- en Levenswetenschappen) (7) | 450 |
| Behavioural and Movement Sciences (Gedrags- en Bewegingswetenschappen) (8) | 422 |
| Medicine (Geneeskunde) (VUmc) (9) | 1079 |
| Dentistry (Tandheelkunde) (ACTA) (10) | 251 |
In the table below we put the number of respondents from each discipline next to the number of academic staff of each faculty. To know the response rate for each discipline, we need the number of VU respondents per discipline and the total number of potential VU respondents who could have filled out the survey.
For the first number we count the disciplines in the VU respondent data. For the potential response size per discipline, we match the disciplines from the survey to the faculties. This way we get an impression of the response rate of each faculty.
We realise that these separations and unifications are somewhat artificial, and we have to remind the reader that one respondent could select multiple disciplines.
The number of VU scientific personnel representing a survey discipline is calculated by splitting or joining the numbers of academic staff of the corresponding faculties.
We split the number for the Sciences faculty to represent the disciplines Physical Sciences and Engineering and Technology. We joined the faculties Behavioural and Movement Sciences and Earth and Life Sciences to represent the discipline Life Sciences, and the faculties Medicine and Dentistry to represent Medicine. The faculties Social Sciences and Economics and Business Administration were joined to represent Social Sciences and Economics, and the faculties Theology and Humanities were joined to represent Arts & Humanities.
The response percentage is calculated by relating the number of VU survey respondents for each discipline to the total number of academic staff who could have filled out the survey for that discipline.
| Survey Discipline | VU respondents (from survey) | Faculty (mapping) | VU scientific personnel (calculated potential) | Response rate |
|---|---|---|---|---|
| Physical Sciences | 39 | Sciences (0.5) (1) | 195 | 20% |
| Engineering and Technology | 35 | Sciences (0.5) (2) | 195 | 18% |
| Life Sciences | 144 | Behavioural and Movement Sciences AND Earth and Life Sciences (3) | 872 | 17% |
| Medicine | 188 | Medicine AND Dentistry (ACTA) (4) | 1330 | 14% |
| Social Sciences and Economics | 176 | Social Sciences AND Economics and Business Administration (5) | 654 | 27% |
| Law | 26 | Law (6) | 219 | 12% |
| Arts & Humanities | 55 | Theology AND Humanities (7) | 307 | 18% |
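The response rates in the last column are plain ratios; as a check, in R, using the numbers from the table:

```r
# Response rate per discipline: survey respondents / calculated potential staff
respondents <- c(PhysSci = 39, EngTech = 35, LifeSci = 144, Medicine = 188,
                 SocSciEcon = 176, Law = 26, ArtsHum = 55)
potential   <- c(PhysSci = 195, EngTech = 195, LifeSci = 872, Medicine = 1330,
                 SocSciEcon = 654, Law = 219, ArtsHum = 307)
round(100 * respondents / potential)  # 20 18 17 14 27 12 18
```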
The charts below show which tools are used most often, as indicated by the responses to the survey. The first set of charts gives a quick summary of the three most used tools per research phase. The second set gives more detail, displaying the usage of all available tools in the different research phases. Bars with a solid fill indicate that the tool is supported by the Library; hatched bars indicate no support. On the public Library website you can find a complete list of Library products and services; more information on Library services and prices can be found on the VU intranet.
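As an indication of how the support encoding can be produced in ggplot2 (the data frame `tool_usage` and its columns are hypothetical; genuine hatching would need an extension package such as ggpattern, so translucency stands in for it here):

```r
library(ggplot2)

# tool_usage: one row per tool, with pct (% of respondents using it) and
# supported (TRUE if the Library supports the tool) -- illustrative names
ggplot(tool_usage, aes(x = reorder(tool, pct), y = pct, alpha = supported)) +
  geom_col(fill = "steelblue") +
  scale_alpha_manual(values = c(`FALSE` = 0.4, `TRUE` = 1),
                     name = "Library supported") +
  coord_flip() +
  labs(x = NULL, y = "% of respondents")
```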
Acrobat Reader, Google Scholar, and institutional access are each used by nearly all VU respondents. Looking at the more detailed figures showing all tools per research activity, it becomes clear that these tools are not direct competitors: each is the leading tool in a different research activity. Acrobat Reader is a very popular tool for reading, Google Scholar for searching literature, and institutional access for gaining access to academic literature.
Excel is the most widely used analysis tool within the VU, with SPSS a close second and R a more distant third. These tools fall within the same activity class and could be considered rivals. This is only partially true: closer inspection shows that nearly 80 per cent of SPSS users indicate that they also use Excel, and about 66 per cent of Excel users also use SPSS. Users of R often use Excel as well, although the opposite is not true. SPSS users are unlikely to use R, although many R users also do analysis in SPSS. It is notable that very few researchers use tools to share their analysis (and, presumably, do not share their analysis). It would be interesting to see how this evolves in the future.
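Overlap figures like these follow directly from the boolean tool columns in the prepared data; a sketch (column names hypothetical):

```r
# Proportion of SPSS users who also use Excel, and the reverse
mean(prepared$tool_excel[prepared$tool_spss])  # ~0.80 in our data
mean(prepared$tool_spss[prepared$tool_excel])  # ~0.66 in our data
```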
Unsurprisingly, Microsoft Word is the most popular writing tool by far, and nearly all respondents indicate that they use it. Nearly all non-Word users write in LaTeX or Overleaf (web-based LaTeX). Google Docs is used in conjunction with Word in 26% of cases; for most others, Word is the only writing tool. The second and third most popular tools in the Writing category are reference managers: EndNote and Mendeley. EndNote is supported by the Library, but Mendeley has only slightly fewer users. EndNote and Mendeley users really are separate groups: very few researchers use both. More than 30 per cent of respondents do not indicate using a reference manager at all.
Unsurprisingly, the traditional topical journal (i.e. 'closed' access) is still the leading outlet for academic work; more than eighty per cent of respondents with at least one publication indicate they have published in such a journal. We see no indication that researchers make a strict choice for either traditional or Open Access outlets, as most respondents indicate having published in both. Note that ResearchGate is seen as a tool for archival rather than publication. From the point of view of the Library, it could be important to make the benefits of the institutional repository clearer. Of course, using the VU repository does not preclude using ResearchGate as well.
Generally, outreach to other academics is more popular than outreach to a broader public. ResearchGate and Google Scholar profiles are the most popular tools in this category. Twitter, and to a lesser extent WordPress, are used as tools for outreach to a broader public. Very few researchers make their presentations available for others.
In the assessment phase, researchers mostly use Thomson Reuters' Web of Science Journal Citation Reports to measure impact. Scopus is used as well, but much less often, presumably because the VU does not offer access to it. Alternative ('open') ways of peer review have very few users.
The 101 Innovations survey received responses from many countries, which makes it possible to compare responses from VU & VUmc staff with international averages. We limit the comparison to the 34 OECD member states (list checked 3 May 2016; note that it includes the Netherlands). Although this list comprises many different countries, responses from these countries are more comparable to the VU than those from the complete dataset, allowing a more meaningful comparison. For example, respondents from countries with a low GDP often use Zotero (free of charge), while EndNote (paid) is used more in countries with a higher GDP.
In the subsections below, we discuss the different research phases in turn. In each figure, hatched bars represent responses from VU and VUmc staff, and filled bars represent all other respondents from OECD countries. All data are shown as a percentage of the respondents in that group. For example, the graph for Discovery - Search shows that about forty per cent of all OECD respondents indicate using PubMed for searching, versus about fifty per cent of VU & VUmc staff.
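Each percentage is computed within its own group; a dplyr sketch (the flag columns `vu_vumc` and `oecd` are hypothetical):

```r
library(dplyr)

# PubMed usage for Search, as a percentage of each comparison group
prepared %>%
  filter(oecd) %>%
  mutate(group = ifelse(vu_vumc, "VU & VUmc", "other OECD")) %>%
  group_by(group) %>%
  summarise(pubmed_pct = 100 * mean(tool_pubmed))
```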
Overall, differences between OECD and VU respondents are not very large, and it is difficult to discern structural deviations of the VU from the OECD averages. One noteworthy point, of possible interest to the Library, is that usage of Mendeley for discovery and reference management is higher at the VU than in other OECD countries, despite access to the EndNote application.
The triplet of VU and VUmc favourites (Acrobat Reader, Google Scholar, institutional access) is popular in the OECD as well. For searching, PubMed and Mendeley are used more often by VU staff, while Scopus is used less (no surprise, since the VU had no subscription at the time of the survey).
Use of SPSS as a tool for analysis is almost twice as high at the VU as the OECD average.
Mendeley users are strongly represented at the VU for reference management. The preference for Mendeley comes at the expense of all other tools except EndNote. For writing, VU respondents are relatively traditional, with high usage of MS Word and low usage of Google Docs and LaTeX.
Scopus usage is relatively low. Few VU respondents use the institutional repository for archival.
There are no clear differences between the VU and OECD averages. VU respondents seem to use more tools for outreach to the general public (mainly Twitter, WordPress, and Wikipedia), but the differences are not substantial.
…
In this section, we report on differences in tool usage between tenured and non-tenured researchers. We consider assistant professors, associate professors and full professors as tenured faculty; PhD students and postdoctoral researchers are grouped as non-tenured.
The first set of graphs is a quick summary with the tools that show the most pronounced differences between the two groups. We calculate the difference by subtracting the use in the tenured group from the use in the non-tenured group (both as percentages). The upper bars show the largest positive differences (i.e. the tool is more popular among non-tenured researchers); the lower bars show the largest negative differences.
The second set of graphs shows all tools in the survey, sorted by research phase and research activity, with the most pronounced differences at the far right and far left of each diagram. The difference is calculated the same way: the bars on the far right show the largest positive differences (the tool is more popular among non-tenured researchers), and the bars on the far left show the largest negative differences (the tool is more popular among tenured researchers).
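A sketch of the difference calculation (role labels and column names are illustrative, and all other roles are lumped in with the non-tenured group here for simplicity):

```r
library(dplyr)

tenured_roles <- c("Assistant professor", "Associate professor", "Full professor")

usage <- prepared %>%
  mutate(tenured = role %in% tenured_roles) %>%   # PhDs/postdocs -> FALSE
  group_by(tenured) %>%
  summarise(across(starts_with("tool_"), ~ 100 * mean(.x)))

# Non-tenured minus tenured, sorted from most positive to most negative
diffs <- unlist(usage[!usage$tenured, -1] - usage[usage$tenured, -1])
sort(diffs, decreasing = TRUE)
```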
The differences in use of PubMed and of table-of-contents announcements for journals stand out as the most significant discrepancies in the Discovery phase. Although it does not feature in the 'top 2' figures, the use of Mendeley stands out when inspecting the more detailed graphs: non-tenured (generally younger) researchers use Mendeley more often in the Reading, Searching and Alert activities within the Discovery phase.
Tool use for analysis is stronger among non-tenured researchers across the board. This holds for relatively new (and more open) tools such as R and Python, as well as for long-standing software such as Excel and MATLAB. The large difference for SPSS is no outlier. Tools for sharing analysis scripts are not very popular, and their usage is low overall. Somewhat unexpectedly, use of the Open Science Framework is stronger among the tenured than the non-tenured group. This could be because the OSF is often used for grant applications, which is arguably a more important activity for tenured researchers.
The importance of Mendeley in the research workflow of non-tenured researchers is again apparent in the Writing phase. Among this group, Mendeley is the most popular reference management software, more popular than EndNote, which is the most popular reference tool for tenured researchers. For the writing itself, MS Word is by far the most popular tool in both groups.
In general, tenured researchers use more tools in the Publication phase; probably they simply publish more. This makes it difficult to interpret these figures properly. A few tools stand out. PubMed is relatively popular among non-tenured researchers for archival of publications, although in absolute terms ResearchGate is the most popular repository for both groups. GitHub is used mostly by non-tenured researchers as a repository for scripts and software code.
Tenured researchers seem to spend more effort on their research profile, as tool use in this phase is higher for that group. ResearchGate is popular in both groups. Although differences are less pronounced, tenured researchers also use more tools for outreach to a broader public.
The difference in use of Web of Science indicators for impact assessment is striking: about 55% of tenured researchers indicate using the tool, versus approximately 20% of non-tenured researchers. Altmetrics and the PLoS metrics are not very popular (yet) in comparison; they are used by both groups, although slightly more by non-tenured researchers.
Although people are not always familiar with the tools they can use for engaging in Open Science, overall they intend to be supportive of Open Access and Open Science.
Apart from the multiple-choice questions, there was one open question:
“What do you think will be the most important development in scholarly communication in the coming years?”
This question got a tremendous response (N=341; 64%), which gives a better view of respondents' worries and hopes about the future of conducting science. The answers show in which directions academic support could invest.
We processed the free-text answers as follows: first, we labelled each answer as expressing a threat or an opportunity; then we filtered for senior researchers (N=188; 35%) to get a manageable sample; finally, we labelled each answer with the research phase and activity it relates to. This way we hope to get a sense of their worries and hopes for the future of scholarly communication.
The professors expressed worries (N=6, 1%) about the following. The number of journals and locations where publications can be found will increase, and if journals disappear because knowledge is democratised in a Wikipedia fashion, it becomes hard to trust or distinguish the quality of a scientific paper or fact (related to Discovery - Search and Assessment - Review). Within the realm of journals as the only option for publishing, there are worries on the one hand about open access journals that ask too high a price to publish, and on the other hand about the quality of open access journals, not only for reading but also for publishing (related to Discovery - Access and Publication - Submit). In other words, the current perception is that a system is lacking that could create a market to increase the quality and trustworthiness of open access journals and reduce the price of publishing. (DOAJ.org and QOAM.eu are means for making the quality of open access journals transparent.)
The professors (N=53, 10%) expressed the following hopes for the future of scholarly communication, per research phase:
As a researcher one needs to know the latest in one's field without being swamped in irrelevant information, and once a relevant paper is found there must be the ability to read it without restriction. The ultimate hope is that there is no need to read through all the papers, but to be alerted only to the latest and necessary information for your current research domain, drawn from the complete scientific corpus, openly accessible to knowledge aggregators and human provenance readers. In terms of more traditional scholarly communication, there is a hope for more clarity on the quality of scientific content, so that the papers remaining in the scientific corpus are higher in quality, or at least can be filtered on quality standards. There is also the hope that one day there will be free access to all that scientific content, without paying. Once a paper is accessible, there is the hope that one can interact more with the content (see for example readcube.com and utopiadocs.com) and with other readers (see for example hypothes.is).
Representative answers per activity:

- Search: "Researchgate as primary tool"
- Access: "Researchgate as primary tool"
- Read: "tools that aggregate scattered knowledge in papers"
When designing and carrying out research, the hope is that there will be more emphasis on replication: working in concurrence, and preregistering protocols and experiment designs. See for example osf.io, protocols.io, scientificprotocols.
There is also hope that working in concurrence forces researchers to deliver improved descriptions of methods and increases the quality of the data, and that this will be required for any valid publication.
Research is so diverse and heterogeneous that no single tool nails the job; the scientific endeavour needs different tools for different research groups in different phases. To excel in science, the hope is that the university will apply 'super segmentation': giving researchers access to the best tools for the right task at the right moment. The 101 research tools database could be a start for this.
Representative answers per activity:

- Sharing: "Publication based on preregistration of protocols/experiments"
- Analyzing: "One important concept in marketing is the idea of 'super segmentation'. Lots of excellent tools are coming available, but they are differentials relevant for different people, and different people need to integrate them. It will become a challenge how to integrate that most optimally. My suspicion is that the university - slow as it is - won't be able to handle this development. So either the universities adapt and let people operate in start-up like enterprises within the university, or good researchers break away from universities"
The hope for scholarly communication in the writing phase is for more responsive communication, without losing credit for one's contributions. Writing could be done in smaller iterations, where feedback helps build towards completing a milestone. This resembles the open source software industry, where working with nightly builds and milestone releases is common practice.
One example of a platform built with responsive communication, version control, collaborative editing for mixed teams of LaTeX and rich-text writers, and commenting during drafting or after publishing, is overleaf.com.
There are also hopes for support in making complex material easier to fathom through visualisations and animations.
Representative answer:

- Write: "Supporting written material by animation."
For deciding where to submit a paper, the hope is that indicators other than the journal impact factor will be considered, such as the author's right to open access to the publication, publication limits or APC budgets, quality factors of a journal (see qoam.eu and doaj.org), or a journal's data policy; even the necessity of publishing with a journal as intermediary can be questioned.
For publishing text, hopes are up for open access, removing gatekeepers and enabling text and data mining, but there is more. Hopes are also up for faster publication of smaller research deliverables in outlets other than journals: disseminating and claiming ideas in preprints, preregistering the research design or hypothesis, storing the findings with the processed/intermediate data along with the code/tool/app/container including the raw data used for processing, publishing the final results with links to the data, and evaluating results and data in peer-review channels. Examples of platforms built for this are f1000research.com ("publish now, review later") and osf.io (preregistering and sharing intermediate results).
For hopes related to storing data, have a look at re3data.org for a complete overview of data repositories worldwide, where you can filter on subject, quality seals, persistency policy, reuse licences and more.
Of course, big-data lovers hope that peer-reviewed results can fuel knowlets and nanopublications, creating big-data knowledge graphs that improve scientific discovery methods and highlight research opportunities in networks across researchers (see nanopub.org).
Representative answers per activity:

- "Elsevier will loose marketshare"
- Publishing: "Researchgate as primary tool"
- Archiving: "Self-archiving, Open Access"
- Datasharing: "Publication in the form of tools, apps to interpret data"
Hopes are for making it easier to advertise and track your own work on social networks like ResearchGate, Academia.edu, Twitter, and LinkedIn. For managing and tracking your outlets, see for example growkudos.com, or other online trainings and workshops.
In this phase, too, there are hopes for support in making complex material easier to fathom through visualisations and animations.
Representative answer:

- Profile and Popular: "Researchgate as primary tool"
Hopes are for decoupling peer review from the publishing location or traditional journal; this would make a self-published and self-promoted contribution to your field recognisable as an evaluated and valid part of the scientific corpus. At the same time there is hope that the effort researchers put into rigorously peer-reviewing the work of others gets recognised as well. This can happen through open interactive discussion in a comment section, or in dedicated peer-review channels. For solutions that decouple peer review and provide recognition, have a look at publons.com, peerageofscience.org and rubriq.com. For informal open discussions, have a look at hypothes.is.
For impact, the hope is that assessment will not be based on the quantity of publications one can crank out, but on the quality of the contribution to science. Recognition could come from your contributions on GitHub, where your code gets reused and forked, or from the badges you have earned during different parts of your research contributions (see openresearchbadges.org). To quantify alternatives for scientific impact, have a look at impactstory.org or altmetric.com.
Representative answers per activity:

- Review: "Post publication peer review opportunities, because peer review fails"
- Impact: "assessment on how to monitor impact"
As a quick summary, we have made a table showing the single most used tool per research phase in each discipline. A more detailed explanation is given in the section below the table, but for the most used tool we can state the following:
Discovery: most disciplines use Google Scholar to discover new literature. Medicine uses PubMed as its primary source for searching. One could say that the Life Sciences find having campus access to literature more important than searching for it, but the detail section below offers a more elaborate explanation: their attention for searching is spread between Google Scholar and PubMed.
Analysis: MS Excel is the most popular analysis tool in all disciplines, except Medicine, which uses SPSS.
Writing: MS Word is the most popular writing tool in all disciplines, except Engineering & Technology, which uses LaTeX.
Publication: publishing in traditional topical journals is still by far the most popular publication method, despite the high support for Open Access.
Outreach: ResearchGate is the most popular platform for profiling your research within the research community, except in two disciplines that use it slightly less: Engineering & Technology uses Google Scholar Citations a bit more, and Arts & Humanities uses Academia.edu more.
Assessment: Physics, Medicine, Life Sciences and Law use Web of Science for the assessment of their research, and the other disciplines use the Journal Citation Reports; both contain the same impact factor, calculated from journals in the ISI database. Internationally there is much debate about whether the merit of the article should count rather than the merit of the journal, and about a reward system that gives credit where credit is due.
In the sections below we show the tool usage of the research disciplines next to each other. This gives us the opportunity to see whether a discipline uses a tool more or less than the others.
Describing the method or workflow of an analysis as part of the article is common practice nowadays. Using a separate platform for sharing the analysis, to make research more easily reproducible, is not common practice yet. This might be because the honour lies not in reproducing research but in advancing science with new findings. These platforms can, however, also be used to preregister a hypothesis and method. In the social sciences we see more familiarity with the Open Science Framework, where there is more attention for preregistration of hypotheses and replication studies. In Medicine there is also some familiarity with a service like Scientific Protocols.
For the analysis, many disciplines use their own specific tools. Excel is the common tool for at least 50 per cent of the Law and Arts & Humanities communities, and for even more in the other disciplines. IPython, R and MATLAB are used mostly in Physics and Engineering & Technology, while R is also known among Life Scientists. SPSS is the commercial package used intensively in Medicine, Social Sciences & Economics and the Life Sciences. Not well known yet, but interesting for the digital humanities, are DH Box and rOpenSci, both offering ready-to-go configurations of computational tools: the first a runtime environment in the cloud with R and IPython, the other an extensive software library for R. In the survey, the following tools were mentioned by VU and VUmc researchers in different disciplines; some were mentioned frequently, like NVivo, or across disciplines, like ATLAS.ti.
In this overview we show the graphs focused on each discipline in each research phase. Next to these bars we place additional bars, so you can compare the discipline against the VU & VUmc average and the OECD average for that discipline.
Figure: legend for tool usage within disciplines, compared to the VU average and the OECD average for that discipline.
10.7 Social Sciences and Economics (N=176)
10.7.1 Discovery
10.7.2 Analysis
10.7.3 Writing
10.7.4 Publication
10.7.5 Outreach
10.7.6 Assessment